Learning Ergonomic Control in Human–Robot Symbiotic Walking

Authors

Abstract

This article presents an imitation learning strategy for extracting ergonomically safe control policies in physical human–robot interaction scenarios. The presented approach seeks to proactively reduce the risk of injuries and musculoskeletal disorders by anticipating the ergonomic effects of a robot's actions on the human partner, e.g., how the ankle angle of a prosthesis affects the future knee torques of the user. To this end, we extend ensemble Bayesian interaction primitives to enable the prediction of latent biomechanical variables. The methodology yields a reactive control strategy, which we evaluate in an assisted walking task with a robotic lower limb prosthesis. Building upon the learned primitives, we also present a model-predictive control (MPC) strategy that actively steers the user toward ergonomic movement regimes. We compare the introduced strategies and highlight the framework's ability to generate ergonomic, biomechanically assistive prosthetic control. A rich analysis of the constrained MPC shows a 20× reduction in the effect of large perturbations on the system. We empirically demonstrate a 16% reduction in vertical reaction forces in real-world jumping experiments utilizing our framework, and examine other optimality criteria in simulated experiments.
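As an illustration of the kind of inference the abstract describes, the sketch below conditions an ensemble of basis-function weights on a partial observation and reads out a predicted latent biomechanical signal together with a control command, in the spirit of ensemble Bayesian interaction primitives. The channel names ("shank angle", "knee torque", "ankle command"), the basis count, and the random prior are illustrative assumptions, not the authors' model or code.

```python
# Illustrative sketch only (not the authors' code): an ensemble Kalman-style
# update over basis-function weights, in the spirit of ensemble Bayesian
# interaction primitives, used to predict a latent biomechanical signal
# (a hypothetical "knee torque" channel) alongside a control channel.
import numpy as np

N_BASIS, N_ENS = 10, 50
rng = np.random.default_rng(0)

def basis(phase, n=N_BASIS, width=0.02):
    """Gaussian radial basis functions evaluated at a gait phase in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n)
    b = np.exp(-(phase - centers) ** 2 / (2.0 * width))
    return b / b.sum()

# Channels: 0 = measured shank angle, 1 = latent knee torque, 2 = ankle command.
# The weight ensemble would normally be fit to human demonstrations; random
# values stand in for that learned prior here.
w_ens = rng.normal(size=(N_ENS, 3, N_BASIS))

def condition(w_ens, phase, shank_obs, obs_noise=1e-2):
    """Update all channels of the weight ensemble from one observed channel."""
    phi = basis(phase)
    pred = w_ens[:, 0, :] @ phi                        # predicted observation per member
    innov = shank_obs + rng.normal(0.0, obs_noise, N_ENS) - pred
    flat = w_ens.reshape(N_ENS, -1)
    d_w = flat - flat.mean(axis=0)
    d_y = pred - pred.mean()
    cross = d_w.T @ d_y / (N_ENS - 1)                  # cov(weights, predicted obs)
    gain = cross / (d_y @ d_y / (N_ENS - 1) + obs_noise)
    return (flat + np.outer(innov, gain)).reshape(N_ENS, 3, N_BASIS)

# Reactive step: observe the shank angle mid-gait, update the ensemble, then
# read out the predicted latent knee torque and the ankle command to apply.
phase = 0.4
w_ens = condition(w_ens, phase, shank_obs=0.25)
phi_next = basis(min(phase + 0.1, 1.0))
knee_torque_pred = float((w_ens[:, 1, :] @ phi_next).mean())
ankle_cmd = float((w_ens[:, 2, :] @ phi_next).mean())
print(knee_torque_pred, ankle_cmd)
```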


Similar Articles

Q Learning based Reinforcement Learning Approach to Bipedal Walking Control

Reinforcement learning has been an active research area in recent years, not only in machine learning but also in control engineering, operations research, and robotics. It is a model-free learning control method that can solve Markov decision problems. Q-learning is an incremental dynamic programming procedure that determines the optimal policy in a step-by-step manner. It is an online procedure for...
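For reference, a minimal tabular Q-learning loop is sketched below; it shows the incremental, step-by-step update the snippet alludes to. The placeholder environment, state/action sizes, and hyperparameters are assumptions, not taken from the cited work.

```python
# Minimal tabular Q-learning sketch (generic illustration, not the cited paper's
# method): an incremental update that, under suitable conditions, converges to
# the optimal action-value function and hence the optimal policy.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Placeholder environment: random next state and sparse reward."""
    return int(rng.integers(n_states)), float(rng.random() < 0.1)

s = 0
for _ in range(1000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q-learning update: move Q(s, a) toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```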

Full Text

Adaptive Sensor-Driven Neural Control for Learning in Walking Machines

Wild rodents learn the danger-predicting meaning of predator bird calls through the pairing of two cues: an aversive stimulus (immediate danger signal, or unconditioned stimulus, US) and an acoustic stimulus (predator signal, or conditioned stimulus, CS). This learning is a form of Pavlovian conditioning. In analogy, this article describes a setup in which adaptive sensor-driven neural co...
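The conditioning analogy can be made concrete with a small correlation-based (differential Hebbian) learning rule, in which a predictive cue (CS) gradually gains influence over a response originally driven by a reflexive signal (US). This is a generic illustration of that idea, not the controller described in the cited work; the trace constant, learning rate, and signal timings are assumptions.

```python
# Generic correlation-based (differential Hebbian) learning sketch: the weight
# on a predictive cue grows when the cue precedes onsets of the reflex input.
import numpy as np

T = 400
cs = np.zeros(T); us = np.zeros(T)
cs[100:110] = 1.0            # predictive cue (CS) arrives first
us[120:130] = 1.0            # aversive/reflex signal (US) arrives later
w_cs, w_us, mu = 0.0, 1.0, 0.05

cs_trace, prev_us = 0.0, 0.0
for t in range(T):
    cs_trace = 0.95 * cs_trace + cs[t]       # low-pass "eligibility" trace of the cue
    response = w_cs * cs[t] + w_us * us[t]   # response driven by cue and reflex input
    d_us = us[t] - prev_us                   # change in the reflex input
    w_cs += mu * cs_trace * max(d_us, 0.0)   # strengthen the cue weight at US onsets
    prev_us = us[t]

print(w_cs)   # after pairing, the cue alone can drive a response (w_cs > 0)
```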

Full Text

Emittance Control in High Power Linacs

This thesis investigates the effect of a magnetic solenoid and a radio-frequency buncher cavity on the halo and emittance of continuous and bunched charged-particle beams in high-current ion and proton linear accelerators, and proposes solutions for keeping these quantities optimal. Emittance is one of the fundamental quantities of charged-particle beams in accelerators and has a considerable influence on the price, cost, and performance of any accelerat...

Efficient Reinforcement Learning through Symbiotic Evolution

This article presents a novel reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and prevents convergence to suboptimal solutions. I...
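To illustrate the mechanism described, the sketch below evolves a population of hidden neurons and assembles candidate networks from random subsets, crediting each participating neuron with the network's fitness. The task, encoding sizes, and genetic-algorithm details are placeholders rather than the SANE implementation.

```python
# Minimal sketch in the spirit of symbiotic neuro-evolution: neurons are the
# unit of selection, and networks are temporary coalitions of neurons.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT, POP, NET_SIZE, GENS = 4, 2, 60, 8, 30

# Each "neuron" is encoded by its input and output weight vectors.
pop = rng.normal(size=(POP, N_IN + N_OUT))

def evaluate(net_idx):
    """Placeholder fitness: how well the assembled network maps a fixed input
    to a fixed target; stands in for a control task such as pole balancing."""
    w_in = pop[net_idx, :N_IN]            # (NET_SIZE, N_IN)
    w_out = pop[net_idx, N_IN:]           # (NET_SIZE, N_OUT)
    h = np.tanh(w_in @ np.ones(N_IN))
    y = np.tanh(h @ w_out)
    return -np.sum((y - np.array([0.5, -0.5])) ** 2)

for gen in range(GENS):
    fit_sum = np.zeros(POP); counts = np.zeros(POP)
    for _ in range(100):                           # assemble many candidate networks
        idx = rng.choice(POP, NET_SIZE, replace=False)
        f = evaluate(idx)
        fit_sum[idx] += f; counts[idx] += 1        # credit every participating neuron
    fitness = np.where(counts > 0, fit_sum / np.maximum(counts, 1), -np.inf)
    # Keep the better half of the neuron population; copy and mutate to refill.
    elite = pop[np.argsort(-fitness)[:POP // 2]]
    children = elite[rng.integers(POP // 2, size=POP // 2)]
    children = children + rng.normal(0.0, 0.1, children.shape)
    pop = np.vstack([elite, children])
```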

Full Text

Control Algorithm For Biped Walking Using Reinforcement Learning

The work is concerned with the integrated dynamic control of humanoid locomotion mechanisms based on the spatial dynamic model of the humanoid mechanism. The control scheme was synthesized using the centralized model with a proposed structure for the dynamic controller that involves two feedback loops: position-velocity feedback of the robotic mechanism joints and reinforcement learning feedback around Ze...
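A rough sketch of such a two-loop structure is given below: an inner joint-space position-velocity (PD) loop plus an additive correction that a learned outer loop would supply from a balance-related error (the snippet is cut off before naming the exact quantity). Gains, dimensions, and the placeholder signals are assumptions, not the paper's controller.

```python
# Illustrative two-loop control sketch (not the cited paper's controller):
# inner position-velocity feedback on the joints, plus an additive learned
# correction driven by a balance-related error signal.
import numpy as np

n_joints = 6
Kp, Kd = 80.0, 8.0

def pd_loop(q, qd, q_ref, qd_ref):
    """Inner loop: position-velocity feedback of the joints."""
    return Kp * (q_ref - q) + Kd * (qd_ref - qd)

def learned_correction(balance_error, policy_weights):
    """Outer loop: stands in for the reinforcement-learning feedback; here a
    simple linear map from a scalar balance error to joint-torque corrections."""
    return policy_weights * balance_error

q, qd = np.zeros(n_joints), np.zeros(n_joints)
q_ref, qd_ref = np.full(n_joints, 0.2), np.zeros(n_joints)
policy_weights = np.zeros(n_joints)      # would be adapted by the RL algorithm
balance_error = 0.03                     # placeholder balance/stability error

tau = pd_loop(q, qd, q_ref, qd_ref) + learned_correction(balance_error, policy_weights)
print(tau)
```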

Full Text


Journal

Journal Title: IEEE Transactions on Robotics

Year: 2023

ISSN: 1552-3098, 1941-0468, 1546-1904

DOI: https://doi.org/10.1109/tro.2022.3192779